RL-based Variable Horizon Model Predictive Control of Multi-Robot Systems using Versatile On-Demand Collision Avoidance
Multi-robot systems have become very popular in recent years because of their
wide spectrum of applications, ranging from surveillance to cooperative payload
transportation. Model Predictive Control (MPC) is a promising controller for
multi-robot control because of its preview capability and ability to handle
constraints easily. The performance of MPC depends heavily on several
parameters, among which the prediction horizon is the major contributor.
Increasing the prediction horizon beyond a limit drastically increases the
computation cost. Tuning the value of the prediction horizon can be very
time-consuming, and the tuning process must be repeated for every task.
Moreover, instead of using a fixed horizon for an entire task, a better balance
between performance and computation cost can be established if different
prediction horizons can be employed for every robot at each time step. Further,
for such variable prediction horizon MPC for multiple robots, on-demand
collision avoidance is the key requirement. We propose the Versatile On-demand
Collision Avoidance (VODCA) strategy to comply with variable horizon model
predictive control. We also present a framework for learning the prediction
horizon for the multi-robot system as a function of the states of the robots
using the Soft Actor-Critic (SAC) RL algorithm. The results are illustrated and
validated numerically for different multi-robot tasks.
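The receding-horizon idea with a per-step, state-dependent horizon can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not the paper's implementation: a 1-D single-integrator "robot", an unconstrained quadratic MPC solved in closed form by least squares, and a simple distance-based heuristic `choose_horizon` standing in for the learned SAC policy.

```python
import numpy as np

def mpc_step(x0, goal, horizon, r=0.1):
    """One receding-horizon step for a toy 1-D single-integrator robot.

    Minimizes sum_k (x_k - goal)^2 + r * u_k^2 over the horizon as an
    unconstrained least-squares problem and returns only the first input
    (the receding-horizon principle).
    """
    N = horizon
    L = np.tril(np.ones((N, N)))            # x_k = x0 + sum_{j<=k} u_j
    # Stack tracking and effort terms: [L; sqrt(r) I] u ~= [goal - x0; 0]
    A = np.vstack([L, np.sqrt(r) * np.eye(N)])
    b = np.concatenate([np.full(N, goal - x0), np.zeros(N)])
    u = np.linalg.lstsq(A, b, rcond=None)[0]
    return u[0]

def choose_horizon(x, goal, n_min=2, n_max=10):
    """Stand-in for the learned policy: longer horizon when far from goal."""
    return int(np.clip(abs(goal - x), n_min, n_max))

x, goal = 0.0, 5.0
for _ in range(30):
    N = choose_horizon(x, goal)             # horizon re-chosen every step
    x = x + mpc_step(x, goal, N)            # apply first input, re-plan
```

The per-step cost of solving the least-squares problem grows with `N`, which is the trade-off the abstract describes: a state-dependent horizon spends computation only when the preview actually helps.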
Image Based Visual Servoing for Tumbling Objects
Objects in space often exhibit a tumbling motion around the major inertial axis. In this paper, we address the image-based visual servoing of a robotic system towards an uncooperative tumbling object. In contrast to previous approaches that require explicit reconstruction of the object and an estimation of its velocity, we propose a novel controller that is able to minimize the feature error directly in image space. This is achieved by observing that the feature points on the tumbling object follow a circular path around the axis of rotation and their projection creates an elliptical track in the image plane. Our controller minimizes the error between this elliptical track and the desired features, such that at the desired pose the features lie on the circumference of the ellipse. The effectiveness of our framework is exhibited by implementing the algorithm in simulation as well as on a mobile robot.
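The geometric observation underlying the controller, that a feature point on a circular path projects to an elliptical track, can be sketched numerically. The pinhole camera, the circle radius, the tilt angle, and the conic fit below are all illustrative assumptions, not the paper's method: we generate a tilted 3-D circle in front of a unit-focal-length camera and fit a general conic to its projection by taking the null vector of the design matrix.

```python
import numpy as np

def fit_conic(pts):
    """Least-squares fit of a conic a x^2 + b xy + c y^2 + d x + e y + f = 0
    to 2-D points: the coefficient vector is the right singular vector of the
    design matrix with the smallest singular value."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

# Feature point on a tumbling object: circular path around the rotation axis.
theta = np.linspace(0, 2 * np.pi, 50)
circle = np.stack([0.5 * np.cos(theta), 0.5 * np.sin(theta),
                   np.zeros_like(theta)])
a = 0.6                                         # assumed tilt of the axis
tilt = np.array([[1, 0, 0],
                 [0, np.cos(a), -np.sin(a)],
                 [0, np.sin(a),  np.cos(a)]])
P = tilt @ circle + np.array([[0.0], [0.0], [3.0]])   # place before camera
uv = (P[:2] / P[2]).T                           # pinhole projection, f = 1

C = fit_conic(uv)
D = np.column_stack([uv[:, 0]**2, uv[:, 0] * uv[:, 1], uv[:, 1]**2,
                     uv[:, 0], uv[:, 1], np.ones(len(uv))])
max_residual = np.max(np.abs(D @ C))            # track lies on the conic
is_ellipse = C[1]**2 - 4 * C[0] * C[2] < 0      # conic discriminant test
```

The negative discriminant confirms the projected track is an ellipse, which is what lets a controller regulate the feature error against the elliptical track rather than against the instantaneous (moving) feature positions.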
Reactionless visual servoing of a multi-arm space robot combined with other manipulation tasks
This paper presents a novel and generic reactionless visual servo controller for a satellite-based multi-arm space robot. The controller is designed to complete the task of visually servoing the robot's end-effectors to a desired pose while maintaining minimum attitude disturbance on the base satellite. A task-function approach is utilized to coordinate the servoing process and the attitude of the base satellite. A redundancy formulation is used to define the tasks. The visual servoing task is defined as the primary task, while regulating the attitude of the base satellite to zero is defined as the secondary task. The secondary task is defined through a quadratic optimization problem in such a way that it does not affect the primary task while simultaneously minimizing its cost function. Stability analysis of the proposed control methodology is also discussed. A set of numerical experiments is carried out on different multi-arm space robotic systems: a planar dual-arm robot, a spatial dual-arm robot, and a planar three-arm robot. The simulation results show the efficacy, generality, and applicability of the proposed control methodology.
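The task-priority structure described above, a secondary task constrained so it cannot disturb the primary task, can be sketched with a generic null-space projection. This is a standard redundancy-resolution construction under assumed dimensions (a 7-DoF arm, a 3-DoF primary task, a random Jacobian, and an arbitrary secondary-task velocity), not the paper's specific attitude-regulation quadratic program.

```python
import numpy as np

rng = np.random.default_rng(0)
J1 = rng.standard_normal((3, 7))            # primary-task Jacobian (assumed)
xdot_des = np.array([0.1, -0.05, 0.02])     # desired primary-task velocity

J1_pinv = np.linalg.pinv(J1)
N = np.eye(7) - J1_pinv @ J1                # null-space projector of J1

# Any secondary-task motion (e.g. the gradient of an attitude-disturbance
# cost) is filtered through N, so it produces zero primary-task velocity.
qdot_secondary = rng.standard_normal(7)
qdot = J1_pinv @ xdot_des + N @ qdot_secondary
```

Because `J1 @ N = 0`, the secondary term contributes nothing to the end-effector motion; the primary visual servoing task is achieved exactly while the remaining degrees of freedom are free to minimize the secondary cost.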